Results 1 - 2 of 2
1.
Autism Res; 16(11): 2110-2124, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37823568

ABSTRACT

The fluent processing of faces can be challenging for autistic individuals. Here, we assessed neural sensitivity to rapid changes in subtle facial cues in 23 autistic men and 23 age- and IQ-matched non-autistic (NA) controls using frequency-tagging electroencephalography (EEG). In oddball paradigms examining the automatic and implicit discrimination of facial identity and facial expression, base-rate images were presented at 6 Hz, periodically interleaved with an oddball image as every fifth image (i.e., a 1.2 Hz oddball frequency). These distinctive frequency tags for base-rate and oddball stimuli allowed direct and objective quantification of the neural discrimination responses. We found no substantial differences in neural sensitivity between the groups, neither for facial identity discrimination nor for facial expression discrimination. Both groups also showed a clear face-inversion effect, with reduced brain responses for inverted versus upright faces. Furthermore, sad faces generally elicited significantly lower neural amplitudes than angry, fearful, and happy faces. The only minor group difference was a larger involvement of high-level right-hemisphere visual areas in NA men during facial expression processing. These findings are discussed from a developmental perspective, as they strikingly contrast with the robust face-processing deficits observed in autistic children using identical EEG paradigms.
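As a rough illustration of the frequency-tagging logic described in this abstract, the sketch below (not the authors' analysis code; sampling rate, durations, and amplitudes are all illustrative assumptions) builds a 6 Hz base / 1.2 Hz oddball presentation schedule and reads the amplitudes at the two tagged frequencies out of a synthetic EEG spectrum.

```python
# Minimal sketch of a frequency-tagging oddball design (illustrative only).
import numpy as np

BASE_HZ = 6.0          # base rate: images presented 6 times per second
ODDBALL_EVERY = 5      # every 5th image is an oddball
ODDBALL_HZ = BASE_HZ / ODDBALL_EVERY  # -> 1.2 Hz oddball frequency

# Build a 60 s presentation schedule: 0 = base image, 1 = oddball image.
n_images = int(60 * BASE_HZ)
schedule = np.zeros(n_images, dtype=int)
schedule[ODDBALL_EVERY - 1::ODDBALL_EVERY] = 1   # images 5, 10, 15, ...

# Simulate an EEG trace containing responses at both tagged frequencies.
fs = 512                                          # sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
eeg = (0.5 * np.sin(2 * np.pi * BASE_HZ * t)      # general visual response
       + 0.2 * np.sin(2 * np.pi * ODDBALL_HZ * t) # discrimination response
       + np.random.randn(t.size) * 0.3)           # noise

# Amplitude spectrum; 60 s of data gives 1/60 Hz resolution, so both
# tagged frequencies fall exactly on FFT bins.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amps = np.abs(np.fft.rfft(eeg)) * 2 / t.size

for f in (BASE_HZ, ODDBALL_HZ):
    idx = np.argmin(np.abs(freqs - f))
    print(f"amplitude at {f:.1f} Hz: {amps[idx]:.3f}")
```

Because base and oddball stimuli are tagged at distinct, exactly resolvable frequencies, the discrimination response can be read directly from the oddball bin without any subtraction of conditions.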


Subject(s)
Autism Spectrum Disorder, Autistic Disorder, Facial Recognition, Male, Child, Humans, Adult, Facial Expression, Discrimination, Psychological/physiology, Photic Stimulation/methods, Electroencephalography/methods, Facial Recognition/physiology
2.
Brain Sci; 13(2), 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36831705

ABSTRACT

Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as the gender, identity, and emotional state of the speaker. We tested whether the brain can systematically and automatically differentiate and track a periodic stream of emotional utterances embedded in a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at a 4 Hz base rate, interleaved with emotional utterances as every third stimulus, hence at a 1.333 Hz oddball frequency. Four emotions (happy, sad, angry, and fearful) were presented as separate conditions in separate streams. To control for the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances; this scrambling preserves low-level acoustic characteristics but renders the emotional character unrecognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response to the scrambled utterances. These findings demonstrate that emotion discrimination is fast, automatic, and not merely driven by low-level perceptual features. Finally, we present a new database of short emotional vocal utterances for vocal emotion research (EVID), together with an innovative frequency-tagging EEG paradigm for implicit vocal emotion discrimination.
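Purely as an assumed sketch (not the paper's actual pipeline), the snippet below constructs the 4 Hz utterance stream with an emotional oddball as every third stimulus, and applies a baseline-correction readout of the kind commonly used in frequency-tagging studies, where the amplitude in the target FFT bin is compared against neighbouring noise bins. The emotion label, durations, and neighbourhood sizes are all illustrative.

```python
# Illustrative sketch of the vocal oddball stream and a common
# frequency-tagging readout (assumed details, not the authors' code).
import numpy as np

BASE_HZ = 4.0
ODDBALL_EVERY = 3
ODDBALL_HZ = BASE_HZ / ODDBALL_EVERY   # -> 1.333... Hz

# 90 s stream of labels: neutral base utterances, with one emotion
# condition (here 'happy') inserted as every 3rd stimulus.
n_stimuli = int(90 * BASE_HZ)
stream = np.array(['neutral'] * n_stimuli, dtype=object)
stream[ODDBALL_EVERY - 1::ODDBALL_EVERY] = 'happy'

def baseline_corrected(amps, freqs, target, n_neighbours=10, gap=1):
    """Amplitude at the target bin minus the mean of surrounding noise
    bins (neighbourhood sizes vary between studies)."""
    idx = np.argmin(np.abs(freqs - target))
    lo = amps[idx - gap - n_neighbours: idx - gap]
    hi = amps[idx + gap + 1: idx + gap + 1 + n_neighbours]
    return amps[idx] - np.mean(np.concatenate([lo, hi]))

# Apply to a synthetic 90 s EEG trace; 1/90 Hz resolution places the
# 1.333 Hz oddball frequency exactly on an FFT bin.
fs = 512
t = np.arange(0, 90, 1 / fs)
eeg = 0.3 * np.sin(2 * np.pi * ODDBALL_HZ * t) + np.random.randn(t.size) * 0.2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amps = np.abs(np.fft.rfft(eeg)) * 2 / t.size
print(f"oddball response: {baseline_corrected(amps, freqs, ODDBALL_HZ):.3f}")
```

The scrambled-utterance control described in the abstract would occupy the same oddball positions in a separate stream, so any residual response there estimates the contribution of low-level acoustics alone.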
